-
Autism spectrum disorder (ASD) is a neurodevelopmental condition marked by notable challenges in cognitive function, language comprehension, object recognition, social interaction, and communication. Its origins are mainly genetic, and early identification and prompt intervention can reduce the need for extensive medical treatment and lengthy diagnostic procedures for those affected by ASD. This research is designed around two types of experiments for ASD analysis. In the first set of experiments, the authors applied three feature engineering techniques (chi-square, backward feature elimination, and PCA) with multiple machine learning models to predict the presence of autism in toddlers. The proposed XGBoost 2.0 achieved 99% accuracy, F1 score, and recall with 98% precision using the chi-square significant features. In the second scenario, the main focus shifts to identifying tailored educational methods for children with ASD by assessing their behavioral, verbal, and physical responses. Again, the proposed approach performs well, reaching 99% accuracy, F1 score, recall, and precision. A cross-validation technique is also implemented to check the stability of the proposed model, along with a comparison against previously published research to show its significance. This study aims to develop personalized educational strategies for individuals with ASD using machine learning techniques to better meet their specific needs.
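A minimal sketch of the kind of pipeline the abstract describes, assuming a tabular toddler-screening dataset with a binary ASD label: chi-square feature selection followed by an XGBoost classifier and cross-validation. The file name, column names, number of selected features, and hyperparameters are illustrative assumptions, and "XGBoost 2.0" is taken here simply to mean a recent xgboost release.

```python
# Sketch of the chi-square feature selection + XGBoost workflow described above.
# Dataset path, column names, k, and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_validate, train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

df = pd.read_csv("toddler_screening.csv")            # hypothetical file
X, y = df.drop(columns=["asd_label"]), df["asd_label"]

# Chi-square scoring requires non-negative features (e.g., 0/1 screening answers).
selector = SelectKBest(chi2, k=10)
X_sel = selector.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_sel, y, test_size=0.2, stratify=y, random_state=42)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))

# Cross-validation to check the stability of the model, as the abstract mentions.
cv = cross_validate(model, X_sel, y, cv=10,
                    scoring=["accuracy", "f1", "precision", "recall"])
print({k: v.mean() for k, v in cv.items() if k.startswith("test_")})
```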
-
Abstract. Objective: Data extraction from the published literature is the most laborious step in conducting living systematic reviews (LSRs). We aim to build a generalizable, automated data extraction workflow leveraging large language models (LLMs) that mimics the real-world 2-reviewer process. Materials and Methods: A dataset of 10 trials (22 publications) from a published LSR was used, focusing on 23 variables related to trial, population, and outcomes data. The dataset was split into prompt development (n = 5) and held-out test sets (n = 17). GPT-4-turbo and Claude-3-Opus were used for data extraction. Responses from the 2 LLMs were considered concordant if they were the same for a given variable. The discordant responses from each LLM were provided to the other LLM for cross-critique. Accuracy, i.e., the total number of correct responses divided by the total number of responses, was computed to assess performance. Results: In the prompt development set, 110 (96%) responses were concordant, achieving an accuracy of 0.99 against the gold standard. In the test set, 342 (87%) responses were concordant. The accuracy of the concordant responses was 0.94. The accuracy of the discordant responses was 0.41 for GPT-4-turbo and 0.50 for Claude-3-Opus. Of the 49 discordant responses, 25 (51%) became concordant after cross-critique, increasing accuracy to 0.76. Discussion: Concordant responses by the LLMs are likely to be accurate. In instances of discordant responses, cross-critique can further increase the accuracy. Conclusion: Large language models, when simulated in a collaborative, 2-reviewer workflow, can extract data with reasonable performance, enabling truly "living" systematic reviews.
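A minimal sketch of the 2-reviewer concordance and cross-critique logic described in the abstract. The `call_llm` wrapper, the prompts, and the variable list are placeholders, not the authors' actual prompts or code; only the control flow (extract with both models, keep concordant answers, route discordant ones through cross-critique) follows the workflow described.

```python
# Sketch of a 2-LLM extraction workflow: extract each variable with both models,
# accept concordant answers, and send discordant ones to cross-critique.
# call_llm(model, prompt) is a hypothetical wrapper around each provider's chat API.
from typing import Callable

def extract(call_llm: Callable[[str, str], str], model: str,
            article_text: str, variable: str) -> str:
    prompt = f"From the trial report below, extract '{variable}'.\n\n{article_text}"
    return call_llm(model, prompt).strip()

def cross_critique(call_llm: Callable[[str, str], str], model: str,
                   article_text: str, variable: str, own: str, other: str) -> str:
    prompt = (f"You extracted '{own}' for '{variable}', while another reviewer "
              f"extracted '{other}'. Re-read the report and return the value "
              f"you now believe is correct.\n\n{article_text}")
    return call_llm(model, prompt).strip()

def two_reviewer_extract(call_llm, article_text: str, variables: list[str]) -> dict:
    results = {}
    for var in variables:
        a = extract(call_llm, "gpt-4-turbo", article_text, var)
        b = extract(call_llm, "claude-3-opus", article_text, var)
        if a == b:                                   # concordant response
            results[var] = a
        else:                                        # discordant: cross-critique
            a2 = cross_critique(call_llm, "gpt-4-turbo", article_text, var, a, b)
            b2 = cross_critique(call_llm, "claude-3-opus", article_text, var, b, a)
            results[var] = a2 if a2 == b2 else {"gpt-4-turbo": a2,
                                                "claude-3-opus": b2}
    return results
```

Accuracy in the study is then simply the number of correct responses divided by the total number of responses, computed separately for concordant and discordant cases.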
-
FSS (few-shot segmentation) aims to segment a target class using a small number of labeled images (the support set). To extract information relevant to the target class, a dominant approach in the best-performing FSS methods removes background features using a support mask. We observe that this feature excision through a limiting support mask introduces an information bottleneck in several challenging FSS cases, e.g., for small targets and/or inaccurate target boundaries. To this end, we present a novel method (MSI), which maximizes the support-set information by exploiting two complementary sources of features to generate super correlation maps. We validate the effectiveness of our approach by instantiating it into three recent and strong FSS methods. Experimental results on several publicly available FSS benchmarks show that our proposed method consistently improves performance by visible margins and leads to faster convergence. Our code and trained models are available at: https://github.com/moonsh/MSI-Maximize-Support-Set-Information
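A minimal sketch of how cosine-similarity correlation maps between query and support features are commonly built in FSS, with two branches standing in for the two complementary feature sources the abstract mentions (mask-limited support features and the full, unmasked support map). Shapes, the fusion step, and the stacking are illustrative assumptions, not the MSI implementation; the actual method is in the linked repository.

```python
# Sketch: correlation maps between query features and two support-feature
# sources (masked foreground vs. full support map). Not the MSI code itself.
import torch
import torch.nn.functional as F

def correlation_maps(query_feat, support_feat, support_mask):
    # query_feat, support_feat: (B, C, H, W); support_mask: (B, 1, H, W) in {0, 1}
    q = F.normalize(query_feat, dim=1)
    s = F.normalize(support_feat, dim=1)

    # Source 1: mask-limited support features (the conventional branch).
    fg = s * support_mask
    corr_masked = torch.einsum("bchw,bcxy->bhwxy", q, fg)

    # Source 2: unmasked support features, so information outside a small or
    # inaccurate mask is not discarded.
    corr_full = torch.einsum("bchw,bcxy->bhwxy", q, s)

    # Stack the two maps; a downstream head can fuse them into a "super" map.
    return torch.stack([corr_masked, corr_full], dim=1)  # (B, 2, H, W, H, W)

B, C, H, W = 1, 64, 16, 16
maps = correlation_maps(torch.randn(B, C, H, W),
                        torch.randn(B, C, H, W),
                        torch.randint(0, 2, (B, 1, H, W)).float())
print(maps.shape)  # torch.Size([1, 2, 16, 16, 16, 16])
```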
-
Robotics and automation are still considered a novelty in the U.S. construction industry, compared to manufacturing, despite their proven advantages for production. Because the needed technology is still advancing, applications of robotics in construction remain limited to date. To better identify the tasks that would benefit from the use of robotics on construction sites, we consider methods for assessing the craft labor tasks that occur in construction. In this paper, we decompose the construction tasks of an observed activity, the installation of a stone veneer system, and compare two systems for categorizing construction tasks: a value-added assessment and a lean (waste) assessment. The analysis compares the two categorization systems using a matrix, which highlights consistency in the alignment of value-adding tasks, such as final placement, and of ineffective tasks with type two muda, but discrepancies emerge regarding contributory tasks related to the logistical support of construction activities. The discussion focuses on the intersection of contributory tasks with type one muda tasks. These contributory tasks offer an opportunity to reduce the use of craft labor on wasteful tasks by leveraging automation and robotics.
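A minimal sketch of the kind of cross-tabulation such a comparison matrix implies, assuming each observed task is labeled once under each categorization system. The task records and category labels below are illustrative placeholders, not the paper's observed data.

```python
# Sketch: cross-tabulating observed stone-veneer tasks under the two
# categorization systems (value-added assessment vs. lean/muda assessment).
# The task list and labels are illustrative placeholders.
import pandas as pd

tasks = pd.DataFrame([
    {"task": "final placement of stone unit", "value": "value-adding", "lean": "value-adding"},
    {"task": "carry stone to scaffold",       "value": "contributory", "lean": "type one muda"},
    {"task": "mix mortar",                    "value": "contributory", "lean": "type one muda"},
    {"task": "wait for material delivery",    "value": "ineffective",  "lean": "type two muda"},
    {"task": "rework misaligned stone",       "value": "ineffective",  "lean": "type two muda"},
])

# The matrix: rows are value-added categories, columns are lean categories.
matrix = pd.crosstab(tasks["value"], tasks["lean"])
print(matrix)

# Cells where contributory tasks meet type one muda mark candidate tasks
# for automation or robotic support.
```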
-
We use estimates of time preferences to customize incentives for polio vaccinators in Lahore, Pakistan. We measure time preferences using intertemporal allocations of effort and use these estimates to construct individually tailored incentives. We evaluate the effect of matching contract terms to discounting parameters in a subsequent experiment with the same vaccinators. Our tailored policy is compared with alternatives that either rely on atheoretic reduced-form relationships for policy guidance or apply the same policy to all individuals. We find that contracts tailored to individual discounting outperform this range of policy alternatives.
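A minimal sketch of how discounting parameters can be recovered from intertemporal effort allocations, assuming a quasi-hyperbolic (beta, delta) specification with a power cost of effort, which is one common way such estimates are obtained from effort-allocation experiments; the moment condition, variable names, and data below are illustrative assumptions, not the paper's actual estimator or data.

```python
# Sketch: fitting quasi-hyperbolic discounting (beta, delta) and effort-cost
# curvature (gamma) to effort allocations via the first-order condition
# (e_sooner / e_later)^(gamma-1) = R * beta^today * delta^k.
# All data and starting values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

# Each row: effort allocated sooner vs. later, the task exchange rate R,
# the delay k (days), and whether the sooner date is "today".
data = np.array([
    # e_sooner, e_later,   R,   k,  sooner_is_today
    [   27.2,    25.0,   1.25,  7,  1],
    [   21.7,    25.0,   1.00,  7,  1],
    [   30.2,    25.0,   1.25,  7,  0],
    [   24.1,    25.0,   1.00,  7,  0],
])
log_ratio = np.log(data[:, 0] / data[:, 1])
X = data[:, 2:].T  # rows: R, k, today

def euler(X, beta, delta, gamma):
    R, k, today = X
    return (np.log(R) + today * np.log(beta) + k * np.log(delta)) / (gamma - 1.0)

params, _ = curve_fit(euler, X, log_ratio, p0=[0.9, 0.99, 2.0])
beta_hat, delta_hat, gamma_hat = params
print(f"beta={beta_hat:.3f}, delta={delta_hat:.3f}, gamma={gamma_hat:.2f}")

# A contract could then be "tailored" by, e.g., front-loading incentives for
# vaccinators whose estimated beta is well below 1 (present bias).
```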